
    Aggregate production planning: A literature review and future research directions

    Aggregate production planning (APP) is concerned with determining the optimum production and workforce levels for each period over the medium-term planning horizon. It aims to set overall production levels for each product family to meet fluctuating demand in the near future. APP is one of the most critical areas of production planning systems. Since the 1992 state-of-the-art survey by Nam and Logendran [Nam, S. J., & Logendran, R. (1992). Aggregate production planning—a survey of models and methodologies. European Journal of Operational Research, 61(3), 255–272], which classified the techniques proposed between 1950 and 1990 into a framework according to their ability to produce an exact optimal or a near-optimal solution, there has been no systematic survey in the literature. This paper reviews the literature on APP models with two main purposes. First, a systematic structure for classifying APP models is proposed. Second, the existing gaps in the literature are identified in order to extract future directions for this research area. In contrast to earlier literature reviews in this field, which focused on methodologies, this paper covers a variety of characteristics of APP models, including modeling structures, important issues, and solving approaches. Finally, some directions for future research in this area are suggested.
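To make the problem the abstract describes concrete, the following is a minimal sketch of an APP instance: choose a workforce level per period so that production plus inventory covers demand, minimizing wage, hiring, firing, and holding costs. All numbers and names here are illustrative assumptions, not taken from the survey; real APP models are typically solved with linear programming rather than the brute-force search used here for clarity.

```python
from itertools import product

# Hypothetical toy instance (all figures are illustrative assumptions).
demand = [100, 150, 120]          # units required in each period
unit_capacity = 40                # units one worker produces per period
cost_worker = 300                 # wage per worker per period
cost_hire, cost_fire = 100, 150   # costs of changing the workforce level
cost_hold = 2                     # inventory holding cost per unit per period
levels = range(0, 6)              # candidate workforce sizes per period

best = None
for plan in product(levels, repeat=len(demand)):
    prev, inventory, cost = 0, 0, 0   # start with no workers, no stock
    feasible = True
    for w, d in zip(plan, demand):
        cost += cost_worker * w
        cost += cost_hire * max(0, w - prev) + cost_fire * max(0, prev - w)
        inventory += unit_capacity * w - d
        if inventory < 0:             # demand must be met (no backorders here)
            feasible = False
            break
        cost += cost_hold * inventory
        prev = w
    if feasible and (best is None or cost < best[0]):
        best = (cost, plan)

print(best)   # (minimum total cost, workforce level per period)
```

Brute force is viable only for tiny horizons; it is used here to show the cost structure (wages, workforce changes, inventory) that APP models trade off.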

    TFS-ViT: Token-Level Feature Stylization for Domain Generalization

    Standard deep learning models such as convolutional neural networks (CNNs) lack the ability to generalize to domains that have not been seen during training. This problem stems mainly from the common but often wrong assumption that the source and target data come from the same i.i.d. distribution. Recently, Vision Transformers (ViTs) have shown outstanding performance on a broad range of computer vision tasks, but very few studies have investigated their ability to generalize to new domains. This paper presents a first Token-level Feature Stylization (TFS-ViT) approach for domain generalization, which improves the performance of ViTs on unseen data by synthesizing new domains. Our approach transforms token features by mixing the normalization statistics of images from different domains. We further improve this approach with a novel strategy for attention-aware stylization, which uses the attention maps of class (CLS) tokens to compute and mix the normalization statistics of tokens corresponding to different image regions. The proposed method is flexible in the choice of backbone model and can be easily applied to any ViT-based architecture with a negligible increase in computational complexity. Comprehensive experiments show that our approach achieves state-of-the-art performance on five challenging benchmarks for domain generalization and can handle different types of domain shift. The implementation is available at: https://github.com/Mehrdad-Noori/TFS-ViT_Token-level_Feature_Stylization
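The core operation the abstract describes, mixing normalization statistics of token features from different domains, can be sketched as follows. This is a simplified, hypothetical illustration in plain Python (the actual implementation in the linked repository operates on batched tensors inside a ViT and includes the attention-aware variant): each image is a set of token vectors; tokens of image A are normalized with A's per-channel statistics and then re-scaled with statistics interpolated between images A and B.

```python
import random

def channel_stats(tokens):
    """Per-channel mean and standard deviation across the token dimension."""
    n, c = len(tokens), len(tokens[0])
    mean = [sum(t[j] for t in tokens) / n for j in range(c)]
    std = [(sum((t[j] - mean[j]) ** 2 for t in tokens) / n) ** 0.5 + 1e-6
           for j in range(c)]
    return mean, std

def stylize(tokens_a, tokens_b, lam):
    """Normalize tokens_a with its own statistics, then re-scale with
    statistics mixed between images a and b, so the output carries a
    synthesized 'style' between the two domains."""
    mean_a, std_a = channel_stats(tokens_a)
    mean_b, std_b = channel_stats(tokens_b)
    mix_mean = [lam * ma + (1 - lam) * mb for ma, mb in zip(mean_a, mean_b)]
    mix_std = [lam * sa + (1 - lam) * sb for sa, sb in zip(std_a, std_b)]
    out = []
    for t in tokens_a:
        out.append([(x - m) / s * ms + mm
                    for x, m, s, ms, mm
                    in zip(t, mean_a, std_a, mix_std, mix_mean)])
    return out

# Toy example: 8 tokens with 4 channels per image, drawn from two
# different distributions to stand in for two domains.
random.seed(0)
toks_a = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(8)]
toks_b = [[random.gauss(2.0, 0.5) for _ in range(4)] for _ in range(8)]
styled = stylize(toks_a, toks_b, lam=0.5)
```

After stylization, the per-channel statistics of `styled` sit between those of the two source images, which is what lets the network see synthesized domains during training.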